Multi-granularity clothing changing-specific person re-identification (ReID) method based on clothing desensitization network
Patent abstract:
The present disclosure provides a multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network. To resolve the problem caused when a person changes clothing, a clothing desensitization network is used to learn a generalized appearance feature of the person, so that the model can identify the person without relying on appearance features such as clothing color and texture. A loss is calculated based on a pre-generated part semantic segmentation image and a feature map to assist feature alignment. This prevents, to the greatest extent, similarity being measured between a background region and a half of the body, and eliminates negative optimization. According to the method in the present disclosure, a coarse-to-fine-grained multi-level training method is used, so that more effective attribute information can be extracted than attribute information extracted from a single global feature. The method achieves an excellent effect on relevant data sets for clothing-changing person ReID.
Publication number: NL2028095A
Application number: NL2028095
Filing date: 2021-04-29
Publication date: 2021-08-11
Inventors: Cheng Zhiyong; Gao Zan; Wei Hongwei; Shu Minglei; Nie Liqiang; Wang Yinglong; Chen Da
Applicant: Shandong Artificial Intelligence Inst
Patent description:
MULTI-GRANULARITY CLOTHING CHANGING-SPECIFIC PERSON RE-IDENTIFICATION (REID) METHOD BASED ON CLOTHING DESENSITIZATION NETWORK

TECHNICAL FIELD
The present disclosure relates to the fields of computer vision and deep learning, and specifically, to a multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network.

BACKGROUND
Person ReID is a technology that uses computer vision to determine whether a specific person appears in an image or a video sequence captured by a video surveillance device. The technology is intended to realize cross-camera retrieval of a target person. In other words, it specifies a target person and retrieves images of that person obtained by a plurality of devices. Person ReID can be combined with person detection and tracking technologies, and plays an important role in urban planning, intelligent monitoring, safety monitoring, and the like. With the development of deep learning and neural network technologies, person ReID has gained more attention in the field of computer vision. Based on the training loss, deep learning-based person ReID methods can be classified into representation learning-based methods and metric learning-based methods. As a frequently used person ReID method, the representation learning-based method does not directly consider the similarity between images when training a network, but treats the person ReID task as a classification task or a verification task. In this kind of method, the last fully-connected (FC) layer of the network does not output the finally used image feature vector. Instead, the Softmax activation function is used to calculate the loss of representation learning. The penultimate FC layer is usually the feature vector layer. Specifically, the classification task uses the identity (ID) or an attribute of a person as a training label to train the model. Only one image is input each time.
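As an illustration of the classification formulation just described, a softmax cross-entropy over person identities can be computed from a feature vector. The feature dimension, identity count, and weight values below are illustrative stand-ins, not values from the disclosure:

```python
import numpy as np

def softmax_ce(logits, label):
    # Softmax cross-entropy: the classification loss of representation learning.
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

# The penultimate FC layer yields the feature vector; a final FC layer maps it
# to one logit per person identity (both are illustrative stand-ins here).
rng = np.random.default_rng(0)
feature = rng.normal(size=256)              # feature-vector layer output
W_id = rng.normal(size=(751, 256)) * 0.01   # 751 identities, an assumed label-set size
logits = W_id @ feature

loss = softmax_ce(logits, label=42)         # train with the person ID as the label
assert loss > 0
```

At test time the logits are discarded and the penultimate feature vector is kept for retrieval, which is exactly why the last FC layer does not output the finally used feature.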
The verification task inputs two person-specific images to enable the network to learn whether the two images belong to the same person. Metric learning is widely used in the field of image retrieval. Different from the representation learning-based method, the metric learning-based method is intended to obtain the similarity between two images through network learning. With respect to person ReID, the similarity between different images of the same person should be greater than that between images of different persons. Specifically, a mapping is defined that maps an image from the original field to a feature field. Then a distance metric function is defined to calculate the distance between two feature vectors. Finally, a metric loss of the network is minimized to find an optimal mapping that minimizes the distance between two images of the same person (a positive sample pair) and maximizes the distance between two images of different persons (a negative sample pair). This mapping is a convolutional neural network (CNN) obtained through training. Frequently used basic networks include AlexNet, VGG, GoogLeNet, ResNet, and the like.

SUMMARY
To overcome the disadvantages of the above technologies, the present disclosure provides a method to avoid the appearance change caused by a person changing clothing in traditional person ReID. Specifically, the method uses clothing desensitization to learn a generalized appearance feature of the person to resolve the problem caused by the person changing clothing, calculates a loss by using a pre-extracted part semantic segmentation image of the person and a feature map to complete feature alignment, and uses a multi-granularity training strategy to mine coarse-to-fine-grained multi-level features of the person throughout the training process.
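The metric-learning objective described above, a smaller distance for a positive pair than for a negative pair, with retrieval done by sorting distances, can be illustrated with toy embeddings. All vectors below are assumed values for illustration, not outputs of the disclosed network:

```python
import numpy as np

anchor   = np.array([0.9, 0.1, 0.0])   # image of person A
positive = np.array([0.8, 0.2, 0.1])   # another image of person A
negative = np.array([0.1, 0.9, 0.3])   # image of person B

d_pos = np.linalg.norm(anchor - positive)   # positive-pair distance
d_neg = np.linalg.norm(anchor - negative)   # negative-pair distance
assert d_pos < d_neg   # a well-trained mapping keeps the same person closest

# Retrieval: sort a small gallery by distance to the query; ascending distance
# is descending similarity.
gallery = np.stack([negative, positive])
ranking = np.argsort(np.linalg.norm(gallery - anchor, axis=1))
assert ranking[0] == 1   # the positive image is returned first
```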
The technical solution used in the present disclosure to resolve the technical problem is as follows. A multi-granularity clothing changing-specific person ReID method based on a clothing desensitization network includes the following steps:
a) performing semantic segmentation on a pre-obtained red green blue (RGB) image by using a semantic segmentation algorithm to obtain a part semantic segmentation image of the corresponding RGB image, and inputting the obtained semantic segmentation image into a network as a label to perform supervised training;
b) inputting the pre-obtained RGB image into a CNN for processing, where the processed RGB image is represented by a 7x7 feature map with 1024 feature channels;
c) performing a convolution operation on the feature map by using eight parallel convolutional layers with different weights; performing dimension reduction and feature extraction on the feature map in space and channels to obtain eight high-dimensional feature maps with different results; connecting corresponding channels of the eight high-dimensional feature maps by using a reshape function to obtain 288 8-dimensional vectors, each representing a single attribute of a person; normalizing the length of each vector by using the nonlinear squashing function $v_k^{8D} = \frac{\|u_k^{8D}\|^2}{1+\|u_k^{8D}\|^2} \cdot \frac{u_k^{8D}}{\|u_k^{8D}\|}$, where $u_k^{8D}$ represents the $k$-th 8-dimensional vector and $k \in [1,288]$;
multiplying the 288 8-dimensional vectors by a weight matrix $W_k$ according to the formula $\hat{v}_k^{24D} = W_k v_k^{8D}$ to obtain 1024 24-dimensional vectors $\hat{v}_k^{24D}$, where $W_k \in \mathbb{R}^{24 \times 8}$ and $\mathbb{R}$ represents real number space; and performing a coupled calculation on the 24-dimensional vectors $\hat{v}_k^{24D}$ according to the formula $C_n^{24D} = \sum_{k=1}^{288} u_{nk} \hat{v}_k^{24D}$, and performing a corresponding vector representation for each person class based on the obtained calculation result $C_n^{24D}$, where the quantity of person classes is $N$, $n \in [1,N]$, and $u_{nk}$ represents a coupling coefficient;
d) inputting the part semantic segmentation image in step a) and the feature map in step b) into a part semantic segmentation module to perform feature alignment, where the part semantic segmentation module includes a deconvolutional layer, a normalization layer, an activation function, and a 1x1 convolutional layer;
e) calculating a loss function $L$ according to the formula $L = \lambda_1 L_{ID} + \lambda_2 L_{triplet} + \lambda_3 L_{part}$, where $L_{ID}$ represents a classification loss, $L_{triplet}$ represents a triplet loss, $L_{part}$ represents a loss of part semantic segmentation, and $\lambda_1$, $\lambda_2$, and $\lambda_3$ represent weights; and
f) optimizing a deep learning model by using the loss function $L$ to perform feature extraction on the person, retrieving a given image of the person from a test set in the optimized deep learning model to obtain other images of the person, and returning a sorted list.
Further, in step a), a dual attention network (DANet) model is used as the segmentation algorithm model, and the DANet model is pre-trained on the COCO DensePose data set. Further, the CNN in step b) uses a DenseNet121 pre-trained on the ImageNet data set as the backbone network. Further, each convolutional layer in step c) has a 2x2 convolution kernel and a stride of 2. Further, the deconvolutional layer in step d) is a 3x3 deconvolutional layer with a stride of 2. Further, step e) includes the following steps:
e-1) calculating the classification loss $L_{ID}$ according to the formula $L_{ID} = \sum_{n=1}^{N} \left[ y_n \cdot \max\left(0,\, m^+ - \|C_n^{24D}\|\right)^2 + \lambda \cdot (1-y_n) \cdot \max\left(0,\, \|C_n^{24D}\| - m^-\right)^2 \right]$, where $y_n$ indicates the person class to which the input image belongs: $y_n = 1$ when the image belongs to person $n$ and $y_n = 0$ when it does not, $n \in [1,N]$; $\lambda$ represents a weight, $\lambda = 0.5$; and $m^+$ and $m^-$ are set boundaries, $m^+ = \frac{N-1}{N}$ and $m^- = \frac{1}{N}$;
e-2) calculating the triplet loss $L_{triplet}$ according to the formula $L_{triplet} = \sum_{i=1}^{P} \sum_{a=1}^{K} \left[ \alpha + \max_{p=1,\dots,K} \left\| f_a^{(i)} - f_p^{(i)} \right\|_2 - \min_{\substack{j=1,\dots,P \\ n=1,\dots,K \\ j \neq i}} \left\| f_a^{(i)} - f_n^{(j)} \right\|_2 \right]_+$, where $f_a^{(i)}$ represents a feature extracted from an anchor image, $f_p^{(i)}$ represents a feature extracted from a positive image, $f_n^{(j)}$ represents a feature extracted from a negative image, $\alpha$ represents a hyper-parameter of the boundary, $P$ represents the quantity of training classes per batch, and $K$ represents the quantity of images in each class; and
e-3) using a cross-entropy loss function for the loss $L_{part}$ of part semantic segmentation.
Further, step f) includes: extracting, by the optimized deep learning model, feature vector representations of all images in the test set; calculating, according to the formula $d_{I_1,I_2} = \left\| f_{I_1} - f_{I_2} \right\|_2$, a Euclidean distance $d_{I_1,I_2}$ representing the similarity between a given image $I_1$ of the person and an image $I_2$ in the test set, where $f_{I_1}$ represents the feature vector of $I_1$ obtained through forward network propagation and $f_{I_2}$ represents the feature vector of $I_2$ obtained through forward network propagation; and sorting the images in the test set in descending order of the calculated similarity, and returning a retrieval result list based on the sorting result.
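The triplet loss of step e-2 mines, for each anchor, the hardest positive (same class, farthest) and the hardest negative (different class, closest) within a batch of P classes times K images. A minimal numpy sketch with toy 2-D embeddings (the values, margin, and batch shape are illustrative assumptions):

```python
import numpy as np

def batch_hard_triplet(feats, labels, alpha=0.3):
    # Sum over anchors of [alpha + hardest-positive - hardest-negative]_+,
    # mined inside the P*K batch.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    loss = 0.0
    for a in range(len(feats)):
        hardest_pos = d[a][same[a]].max()    # farthest image of the same person
        hardest_neg = d[a][~same[a]].min()   # closest image of a different person
        loss += max(0.0, alpha + hardest_pos - hardest_neg)
    return loss

# P = 2 person classes, K = 2 images each; the two clusters are well separated.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
loss = batch_hard_triplet(feats, labels)
assert loss == 0.0  # every anchor already satisfies the alpha margin
```

Because only the hardest pair per anchor contributes, easy triplets produce no gradient, which is the robustness property the disclosure attributes to the improved triplet loss.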
The present disclosure has the following beneficial effects. In a scenario in which a person changes clothing, the clothing desensitization network is used to learn a generalized appearance feature of the person, so that the model can identify the person without relying on appearance features such as clothing color and texture. The loss is calculated based on the pre-generated part semantic segmentation image and the feature map to assist feature alignment. This avoids, to the greatest extent, similarity measurement between a background region and a half of the body, and eliminates negative optimization. According to the method in the present disclosure, a coarse-to-fine-grained multi-level training method is used, so that more effective attribute information can be extracted than attribute information extracted from a single global feature. The multi-granularity clothing changing-specific person ReID method based on a clothing desensitization network in the present disclosure achieves an excellent effect on relevant data sets for clothing-changing person ReID.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the present disclosure; and FIG. 2 is a framework diagram of a clothing desensitization network according to the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS
The present disclosure is further described with reference to FIG. 1 and FIG. 2. A multi-granularity clothing changing-specific person ReID method based on a clothing desensitization network includes the following steps:
a) Semantic segmentation is performed on a pre-obtained RGB image by using a semantic segmentation algorithm to obtain a part semantic segmentation image of the corresponding RGB image, and the obtained semantic segmentation image is input into a network as a label to perform supervised training.
b) To avoid the adverse impact caused when an existing method cannot shield clothing or appearance while extracting features of a person, the present disclosure provides a new module for extracting a generalized feature of a person based on a clothing desensitization network. Specifically, the pre-obtained RGB image is first input into a CNN for processing. The processed RGB image is represented by a 7x7 feature map with 1024 feature channels, and the feature map includes all feature information of the person, including clothing and appearance.
c) The following operations are performed: performing a convolution operation on the feature map by using eight parallel convolutional layers with different weights; performing dimension reduction and feature extraction on the feature map in space and channels to obtain eight high-dimensional feature maps with different results; connecting corresponding channels of the eight high-dimensional feature maps by using a reshape function to obtain 288 8-dimensional vectors, each representing a single attribute of the person; normalizing the length of each vector by using the nonlinear squashing function $v_k^{8D} = \frac{\|u_k^{8D}\|^2}{1+\|u_k^{8D}\|^2} \cdot \frac{u_k^{8D}}{\|u_k^{8D}\|}$, where $u_k^{8D}$ represents the $k$-th 8-dimensional vector and $k \in [1,288]$; multiplying the 288 8-dimensional vectors by a weight matrix $W_k$ according to the formula $\hat{v}_k^{24D} = W_k v_k^{8D}$ to obtain 1024 24-dimensional vectors $\hat{v}_k^{24D}$, where $W_k \in \mathbb{R}^{24 \times 8}$ and $\mathbb{R}$ represents real number space; and performing a coupled calculation on the 24-dimensional vectors according to the formula $C_n^{24D} = \sum_{k=1}^{288} u_{nk} \hat{v}_k^{24D}$, and performing a corresponding vector representation for each person class based on the obtained calculation result $C_n^{24D}$, where the quantity of person classes is $N$, $n \in [1,N]$, and $u_{nk}$ represents a coupling coefficient.
d) Due to stance and camera-angle changes, persons are not aligned across images; in other words, the proportions of the persons to the whole images are inconsistent.
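The step c) pipeline described above (eight parallel branches, reshaping into 288 8-dimensional attribute vectors, squashing, per-vector weight matrices, and coupled summation into per-class vectors) can be sketched with numpy. The per-branch channel count, the class count N, and the uniform coupling coefficients are illustrative assumptions; the disclosure does not fix them here:

```python
import numpy as np

def squash(u, eps=1e-9):
    # v = (|u|^2 / (1 + |u|^2)) * (u / |u|): keeps direction, maps length into [0, 1).
    norm = np.linalg.norm(u, axis=-1, keepdims=True)
    return (norm ** 2 / (1.0 + norm ** 2)) * (u / (norm + eps))

rng = np.random.default_rng(1)
# A 2x2 kernel with stride 2 over the 7x7 map yields 3x3 outputs: (7-2)//2 + 1 == 3.
# 32 channels per branch is an assumption chosen so that 32*3*3 == 288 vectors result.
branches, ch, h, w = 8, 32, 3, 3
maps = rng.normal(size=(branches, ch, h, w))   # stand-ins for the 8 conv outputs

# "Connect corresponding channels": slot (c, y, x) of all 8 branches forms one
# 8-dimensional attribute vector, giving 288 vectors in total.
u8 = maps.reshape(branches, ch * h * w).T      # -> (288, 8)
v8 = squash(u8)                                 # length-normalized attribute vectors

K, N = 288, 10                                  # N person classes (assumed)
W = rng.normal(size=(K, 24, 8)) * 0.1           # one 24x8 weight matrix per vector
v24 = np.einsum('kij,kj->ki', W, v8)            # v_hat_k = W_k v_k -> (288, 24)

u_nk = np.full((N, K), 1.0 / K)                 # uniform coupling coefficients (illustrative)
C = u_nk @ v24                                  # C_n = sum_k u_nk * v_hat_k -> (N, 24)
assert v8.shape == (288, 8) and C.shape == (N, 24)
```

In a capsule-style network the coupling coefficients $u_{nk}$ would typically be refined iteratively rather than held uniform; uniform values are used here only so the shapes and the coupled sum are easy to follow.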
Most existing methods align and extract fine-grained features through horizontal striping. In this way, a background region and a half of the body of a person may be aligned with each other, inevitably resulting in negative optimization. According to the present disclosure, the feature map is divided into regions, so merged features from different regions are expected to differ from each other; in other words, there are almost no redundant features. Therefore, a part-based semantic segmentation constraint is imposed on the feature map to force the model to predict a part label from the feature map. If the part semantic segmentation module can predict the part label based on the feature map, the positioning capability of the module is well maintained. This reduces redundancy and realizes feature alignment. Specifically, the part semantic segmentation image in step a) and the feature map in step b) are input into the part semantic segmentation module to perform feature alignment. The part semantic segmentation module includes a deconvolutional layer, a normalization layer, an activation function, and a 1x1 convolutional layer.
e) In most existing methods, a representation learning method is used to extract an image feature, person ReID is regarded as classification, and a class label is used to calculate a classification loss. In the present disclosure, two branches are set in the network to learn the feature map and a part map respectively. The feature map is used to learn a feature of the person for retrieval, and the part map is used to assist feature alignment. Therefore, the loss function is divided into two parts. One part calculates the classification loss based on the feature map. In addition, compared with a single classification loss, a triplet loss is added as a metric loss in the present disclosure, to decrease the distance between intra-class features and increase the distance between inter-class features.
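The classification loss on the feature map can be read, per the formula in step e-1, as a squared margin loss on the lengths of the class vectors $C_n$: the true class is pulled above $m^+$ and the other classes pushed below $m^-$. A numpy sketch with toy class vectors (the values and class count are illustrative assumptions):

```python
import numpy as np

def margin_loss(C, y, lam=0.5):
    # L_ID on class-vector lengths ||C_n||, with boundaries m+ = (N-1)/N, m- = 1/N.
    N = len(C)
    m_pos, m_neg = (N - 1) / N, 1 / N
    lengths = np.linalg.norm(C, axis=-1)
    present = y * np.maximum(0.0, m_pos - lengths) ** 2             # pull the true class long
    absent = lam * (1 - y) * np.maximum(0.0, lengths - m_neg) ** 2  # push other classes short
    return float(np.sum(present + absent))

# Four toy class vectors, each of length 0.9; the image belongs to class 0.
C = np.eye(4) * 0.9
y = np.array([1.0, 0.0, 0.0, 0.0])
loss = margin_loss(C, y)   # only the three wrong classes contribute here
```

With N = 4 the boundaries are m+ = 0.75 and m- = 0.25, so the true class (length 0.9) contributes nothing while each wrong class contributes 0.5 * (0.9 - 0.25)^2.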
The other part of the loss function calculates the loss of part semantic segmentation based on the part map. Therefore, the loss function $L$ is calculated according to the formula $L = \lambda_1 L_{ID} + \lambda_2 L_{triplet} + \lambda_3 L_{part}$, where $L_{ID}$ represents the classification loss, $L_{triplet}$ represents the triplet loss, $L_{part}$ represents the loss of part semantic segmentation, and $\lambda_1$, $\lambda_2$, and $\lambda_3$ represent weights.
f) A deep learning model is optimized by using the loss function $L$ to perform feature extraction on the person, a given image of the person is retrieved from a test set in the optimized deep learning model to obtain other images of the person, and a sorted list is returned. Optimizing the ranking of image retrieval results is very important for improving retrieval performance in the test stage. For example, automatically mining sample similarity in the retrieval database is equivalent to performing ranking optimization. The basic idea of re-sorting person ReID retrieval results is to optimize the initial ranking based on the sample similarity in the retrieval database: a retrieval result with a higher similarity can be moved toward the front of the retrieval sequence, and a retrieval result with a lower similarity can be moved toward the tail. In addition, k-reciprocal encoding can be introduced to mine context information to improve the initially sorted list. Because of its simplicity and effectiveness, k-reciprocal encoding has been widely used in current sorting optimization algorithms. In the present disclosure, to resolve the problem caused when the person changes clothing, the clothing desensitization network is used to learn a generalized appearance feature of the person, so that the model can identify the person without relying on appearance features such as clothing color and texture. The loss is calculated based on the pre-generated part semantic segmentation image and the feature map to assist feature alignment.
This prevents, to the greatest extent, similarity measurement between a background region and a half of the body, and eliminates negative optimization. According to the method in the present disclosure, a coarse-to-fine-grained multi-level training method is used, so that more effective attribute information can be extracted than attribute information extracted from a single global feature. The multi-granularity clothing changing-specific person ReID method based on a clothing desensitization network in the present disclosure achieves an excellent effect on relevant data sets for clothing-changing person ReID. In step a), a DANet model is used as the segmentation algorithm model, and the DANet model is pre-trained on the COCO DensePose data set. The CNN in step b) uses a DenseNet121 pre-trained on the ImageNet data set as the backbone network, to effectively extract low-dimensional image features. Each convolutional layer in step c) has a 2x2 convolution kernel and a stride of 2. The deconvolutional layer in step d) is a 3x3 deconvolutional layer with a stride of 2. Step e) includes the following steps:
e-1) calculating the classification loss $L_{ID}$ according to the formula $L_{ID} = \sum_{n=1}^{N} \left[ y_n \cdot \max\left(0,\, m^+ - \|C_n^{24D}\|\right)^2 + \lambda \cdot (1-y_n) \cdot \max\left(0,\, \|C_n^{24D}\| - m^-\right)^2 \right]$, where $y_n$ indicates the person class to which the input image belongs: $y_n = 1$ when the image belongs to person $n$ and $y_n = 0$ when it does not, $n \in [1,N]$; $\lambda$ represents a weight, $\lambda = 0.5$; and $m^+$ and $m^-$ are set boundaries, $m^+ = \frac{N-1}{N}$ and $m^- = \frac{1}{N}$;
e-2) calculating the triplet loss Pel according to a formula CS 0 _ /0 0 _ 0 == i i : i J Lyi = 2 [e+ max | £0 = 79 — min | 7 - £1] 1, - p=L..K 2 n=L..K 2 i=l a=1 j=L.P f I , where 79 represents a feature extracted from an anchor image, Jy represents a feature extracted from a positive image, Js represents a feature extracted from a negative image, a represents a hyper-parameter of a boundary, © represents a quantity of training classes per batch, K represents a quantity of images in each class, and such an improved triplet loss enhances robustness of metric learning and further improves performance; and e-3) calculating the loss of part semantic segmentation through size normalization based on a part-based segmentation loss of a feature, where a cross-entropy loss . L art . . function is used for the loss 7 " of part semantic segmentation; and before cross- part averaging, all parts are averaged to avoid that the loss value is only determined by large parts, and this is very important for small parts such as feet, because these small parts also contain a lot of fine-grained distinctive information, and play an egually important role. -11- The step f) includes: extracting, by the optimized deep learning model, feature vector representations of all images in the test set; calculating, according to a formula ys, == 11] d 11 I . . . a 2 ! 212 3 Euclidean distance +2 representing a similarity between a Co I, I, i given image ! of the person and an image in the test set, where “1 represents a IA feature vector, obtained through forward network prorogation, of 1, °° represents : 1, a feature vector, obtained through forward network prorogation, of 2; and sorting the images in the test set in descending order based on the calculated similarity, and returning a retrieval result list based on a sorting result. Finally, it should be noted that the above descriptions are only preferred embodiments of the present disclosure and are not intended to limit the present disclosure. 
Although the present disclosure is described in detail with reference to the foregoing embodiments, a person skilled in the art can still make modifications to the technical solutions described in the foregoing embodiments, or make equivalent replacements of some technical features therein. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and scope of the present disclosure shall be included within the protection scope of the present disclosure.
Claims:
Claims (7)
[1] 1. A multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network, comprising the following steps:
a) performing semantic segmentation on a pre-obtained red green blue (RGB) image by using a semantic segmentation algorithm to obtain a part semantic segmentation image of the corresponding RGB image, and inputting the obtained semantic segmentation image into a network as a label to perform supervised training;
b) inputting the pre-obtained RGB image into a convolutional neural network (CNN) for processing, wherein a processed RGB image is represented by a 7x7 feature map with 1024 feature channels;
c) performing a convolution operation on the feature map by using eight parallel convolutional layers with different weights; performing dimension reduction and feature extraction on the feature map in space and channels to obtain eight high-dimensional feature maps with different results; connecting corresponding channels of the eight high-dimensional feature maps by using a reshape function to obtain 288 8-dimensional vectors, each representing a single attribute of a person; normalizing the length of each vector by using the nonlinear squashing function $v_k^{8D} = \frac{\|u_k^{8D}\|^2}{1+\|u_k^{8D}\|^2} \cdot \frac{u_k^{8D}}{\|u_k^{8D}\|}$, where $u_k^{8D}$ represents the $k$-th 8-dimensional vector and $k \in [1,288]$; multiplying the 288 8-dimensional vectors by a weight matrix $W_k$ according to the formula $\hat{v}_k^{24D} = W_k v_k^{8D}$ to obtain 1024 24-dimensional vectors $\hat{v}_k^{24D}$, where $W_k \in \mathbb{R}^{24 \times 8}$ and $\mathbb{R}$ represents real number space; and performing a coupled calculation on the 24-dimensional vectors according to the formula $C_n^{24D} = \sum_{k=1}^{288} u_{nk} \hat{v}_k^{24D}$, and performing a corresponding vector representation for each person class based on the obtained calculation result $C_n^{24D}$, where the quantity of person classes is $N$, $n \in [1,N]$, and $u_{nk}$ represents a coupling coefficient;
d) inputting the part semantic segmentation image in step a) and the feature map in step b) into a part semantic segmentation module to perform feature alignment, wherein the part semantic segmentation module comprises a deconvolutional layer, a normalization layer, an activation function, and a 1x1 convolutional layer;
e) calculating a loss function $L$ according to the formula $L = \lambda_1 L_{ID} + \lambda_2 L_{triplet} + \lambda_3 L_{part}$, where $L_{ID}$ represents a classification loss, $L_{triplet}$ represents a triplet loss, $L_{part}$ represents a loss of part semantic segmentation, and $\lambda_1$, $\lambda_2$, and $\lambda_3$ represent weights; and
f) optimizing a deep learning model by using the loss function $L$ to perform feature extraction on the person, retrieving a given image of the person from a test set in the optimized deep learning model to obtain other images of the person, and returning a sorted list.
[2] 2. The multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network according to claim 1, wherein, in step a), a dual attention network (DANet) model is used as the segmentation algorithm model, and the DANet model is pre-trained on the COCO DensePose data set.
[3] 3. The multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network according to claim 1, wherein the CNN in step b) uses a DenseNet121 pre-trained on the ImageNet data set as a backbone network.
[4] 4. The multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network according to claim 1, wherein each convolutional layer in step c) has a 2x2 convolution kernel and a stride of 2.
[5] 5. The multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network according to claim 1, wherein the deconvolutional layer in step d) is a 3x3 deconvolutional layer with a stride of 2.
[6] 6. The multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network according to claim 1, wherein step e) comprises the following steps:
e-1) calculating the classification loss $L_{ID}$ according to the formula $L_{ID} = \sum_{n=1}^{N} \left[ y_n \cdot \max\left(0,\, m^+ - \|C_n^{24D}\|\right)^2 + \lambda \cdot (1-y_n) \cdot \max\left(0,\, \|C_n^{24D}\| - m^-\right)^2 \right]$, where $y_n$ indicates the person class to which an input image belongs, $y_n = 1$ if the image belongs to person $n$, $y_n = 0$ if the image does not belong to person $n$, $n \in [1,N]$, $\lambda$ represents a weight, $\lambda = 0.5$, and $m^+$ and $m^-$ are set boundaries with $m^+ = \frac{N-1}{N}$ and $m^- = \frac{1}{N}$;
e-2) calculating the triplet loss $L_{triplet}$ according to the formula $L_{triplet} = \sum_{i=1}^{P} \sum_{a=1}^{K} \left[ \alpha + \max_{p=1,\dots,K} \left\| f_a^{(i)} - f_p^{(i)} \right\|_2 - \min_{\substack{j=1,\dots,P \\ n=1,\dots,K \\ j \neq i}} \left\| f_a^{(i)} - f_n^{(j)} \right\|_2 \right]_+$, where $f_a^{(i)}$ represents a feature extracted from an anchor image, $f_p^{(i)}$ represents a feature extracted from a positive image, $f_n^{(j)}$ represents a feature extracted from a negative image, $\alpha$ represents a hyper-parameter of the boundary, $P$ represents a quantity of training classes per batch, and $K$ represents a quantity of images in each class; and
e-3) using a cross-entropy loss function for the loss $L_{part}$ of part semantic segmentation.
[7] 7. The multi-granularity clothing changing-specific person re-identification (ReID) method based on a clothing desensitization network according to claim 1, wherein step f) comprises: extracting, by the optimized deep learning model, feature vector representations of all images in the test set; calculating, according to the formula $d_{I_1,I_2} = \left\| f_{I_1} - f_{I_2} \right\|_2$, a Euclidean distance $d_{I_1,I_2}$ representing the similarity between a given image $I_1$ of the person and an image $I_2$ in the test set, where $f_{I_1}$ represents a feature vector of $I_1$ obtained through forward network propagation and $f_{I_2}$ represents a feature vector of $I_2$ obtained through forward network propagation; and sorting the images in the test set in descending order based on the calculated similarity, and returning a retrieval result list based on the sorting result.
Family patents:
Publication number: CN112784728A | Publication date: 2021-05-11
Cited documents:
Publication number: CN113822236A | Filing date: 2021-11-22 | Publication date: 2021-12-21 | Applicant: 杭州云栖智慧视通科技有限公司 | Patent title: Jacket color replacement method based on human semantic component
Priority:
Application number: CN202110065609.7A | Publication number: CN112784728A | Filing date: 2021-01-18 | Patent title: Multi-granularity clothes changing pedestrian re-identification method based on clothing desensitization network